- Search Results
-
Chengxi, Ye; Evanusa, Matthew; He, Hua; Mithrokin, Anton; Goldstein, Thomas; Yorke, James; Fermuller, Cornelia; Aloimonos, Yiannis (ArXiv.org)
Convolution is the central operation in Convolutional Neural Networks (CNNs): a kernel is applied to overlapping regions shifted across the image. However, because of the strong correlations in real-world image data, convolutional kernels in effect repeatedly re-learn redundant information. In this work, we show that this redundancy makes neural network training more difficult, and we propose network deconvolution, a procedure that optimally removes pixel-wise and channel-wise correlations before the data is fed into each layer. Network deconvolution can be computed efficiently, at a fraction of the cost of a convolution layer. We also show that the deconvolution filters in the first layer of the network resemble the center-surround structure of biological neurons in the visual regions of the brain. Filtering with such kernels yields a sparse representation, a desirable property that has been missing from neural network training. Learning from the sparse representation promotes faster convergence and superior results without the use of batch normalization. We apply our network deconvolution operation to 10 modern neural network models by replacing batch normalization in each. Extensive experiments show that network deconvolution delivers performance improvements in all cases on the CIFAR-10, CIFAR-100, MNIST, Fashion-MNIST, Cityscapes, and ImageNet datasets.
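The core idea in the abstract — removing pixel-wise and channel-wise correlations from the data before each layer — can be illustrated with a classic ZCA whitening transform. This is only a minimal sketch of the general decorrelation technique on flattened patch data, not the authors' actual implementation; the function name `zca_whiten` and the toy correlated data are assumptions for illustration.

```python
import numpy as np

def zca_whiten(X, eps=1e-5):
    """Decorrelate the columns (features) of X with a ZCA whitening transform.

    X   : (n_samples, n_features) matrix, e.g. flattened image patches.
    eps : small regularizer so near-zero eigenvalues do not blow up.
    """
    Xc = X - X.mean(axis=0)                     # center each feature
    cov = Xc.T @ Xc / (len(Xc) - 1)             # feature covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)      # symmetric eigendecomposition
    # ZCA transform: rotate to eigenbasis, rescale, rotate back.
    W = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + eps)) @ eigvecs.T
    return Xc @ W

# Toy demo: two strongly correlated "pixel" features.
rng = np.random.default_rng(0)
base = rng.standard_normal((1000, 1))
X = np.hstack([base, base + 0.1 * rng.standard_normal((1000, 1))])
Xw = zca_whiten(X)
print(np.round(np.cov(Xw, rowvar=False), 2))   # close to the 2x2 identity
```

After whitening, the empirical covariance of the transformed data is (approximately) the identity, so a downstream layer no longer re-learns the correlation structure. ZCA is used here rather than PCA whitening because it keeps the output in the original coordinate frame, which matches the intuition of decorrelating pixels "in place."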
-
Baraniuk, Richard G.; Goldstein, Thomas; Sankaranarayanan, Aswin C.; Studer, Christoph; Veeraraghavan, Ashok; Wakin, Michael B. (IEEE Signal Processing Magazine)
-
Goldstein, Thomas; Wu, Yueh-Chun; Chen, Shao-Yu; Taniguchi, Takashi; Watanabe, Kenji; Varga, Kalman; Yan, Jun (The Journal of Chemical Physics)